Our project had two main goals. First, the robot should traverse the course, creating and saving a map of all the walls in the course (mapping). Second, the robot should use the map and a given set of checkpoint coordinates to find the fastest route through the course that hits all the checkpoints without hitting any walls (path optimization).

For the mapping part, we took advantage of the fact that the course was already laid out on a 12x12 tile grid. The robot follows a right wall around the course using two IR sensors located on the front and right of the robot (with the right sensor angled at ~45°) and staying on the gridlines between tiles. At each intersection it “sees” the two tiles to its front-left and front-right, checks the wall configuration for those two tiles, and updates the mapping array. We assumed that there were no obstacles (other than walls) in the course and that any course configuration we might encounter could be completely mapped by right wall following. Instead of using the ladar and the gyro to determine the robot’s position in the course, we relied entirely on the encoders and adjustments made during right wall following. The following diagram explains the right wall following code contained in “user_ladar.c”.
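
As a rough illustration of the steering idea (a hedged sketch, not the actual “user_ladar.c” code; the sensor semantics, setpoint, and gains are assumptions), right wall following can be written as a proportional controller on the angled right IR reading:

#define RIGHT_SETPOINT  1.0f  /* desired right-IR reading when parallel to the wall */
#define FRONT_THRESH    0.5f  /* front-IR reading that signals a wall ahead         */
#define KP              2.0f  /* proportional steering gain                         */
#define BASE_SPEED      0.5f  /* nominal forward speed                              */

/* One control step of right wall following: steer so the angled right IR
 * sensor holds a constant reading, which keeps the robot parallel to the
 * wall.  A close front reading means a wall ahead, forcing a left turn. */
void wall_follow_step(float front_ir, float right_ir,
                      float *vleft, float *vright, int *turn_left)
{
    *turn_left = (front_ir < FRONT_THRESH);   /* wall ahead: 90° left turn */

    float error = right_ir - RIGHT_SETPOINT;  /* too far from wall => error > 0 */
    *vleft  = BASE_SPEED + KP * error;        /* speed up left wheel to steer   */
    *vright = BASE_SPEED - KP * error;        /* back toward the right wall     */
}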

Note that “update theta” means to change theta by ±90° as the robot completes the left or right turn. “Update location” means rounding the variables containing the robot’s x and y positions to the nearest integer values. This is done at every turn, since all turns should occur on grid points, and it corrects for any error in the x and y positions as calculated from the encoder values.
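
A minimal sketch of this bookkeeping, assuming positions are kept in tile units and the heading in radians (the variable names are illustrative; the real logic lives in “user_ladar.c”):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

float robot_x, robot_y;   /* position in tile units, integrated from the encoders */
float robot_theta;        /* heading in radians                                    */

/* Called as each 90° turn completes.  Turns only happen on grid points,
 * so snapping x and y to the nearest integers cancels the position error
 * accumulated by the encoders since the last turn. */
void update_at_turn(int turning_right)
{
    robot_x = roundf(robot_x);                         /* "update location" */
    robot_y = roundf(robot_y);

    robot_theta += turning_right ? -(float)M_PI / 2.0f /* "update theta"    */
                                 :  (float)M_PI / 2.0f;
}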

The “don’t follow” sequence begins when the robot is following a right wall and loses it. This occurs when the robot needs to make a right turn: it drives past the end of the wall, loses track of it with the right IR sensor, and enters this code sequence. The code handles two possible configurations: 1) the robot needs to make one right turn (the wall is L-shaped), or 2) the robot needs to make two right turns (there is a wall sticking out into the course).
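
One way to picture the sequence is as a small state machine; the states, inputs, and structure below are illustrative assumptions, not the report’s actual code:

/* Hedged sketch of the "don't follow" sequence, entered when the right
 * IR sensor loses the wall.  Each state persists until its motion
 * completes; motion_done and wall_seen would come from the encoders and
 * a thresholded right-IR reading. */
enum df_state { DF_ADVANCE, DF_TURN_RIGHT, DF_CHECK, DF_DONE };

enum df_state dont_follow_step(enum df_state s, int motion_done, int wall_seen)
{
    switch (s) {
    case DF_ADVANCE:     /* drive forward past the end of the lost wall */
        return motion_done ? DF_TURN_RIGHT : DF_ADVANCE;
    case DF_TURN_RIGHT:  /* make one 90° right turn around the corner   */
        return motion_done ? DF_CHECK : DF_TURN_RIGHT;
    case DF_CHECK:
        /* Configuration 1: the wall reappears (L-shaped wall), so resume
         * normal right wall following.  Configuration 2: still no wall
         * (a wall sticking out into the course), so a second right turn
         * is needed. */
        return wall_seen ? DF_DONE : DF_TURN_RIGHT;
    default:
        return DF_DONE;
    }
}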

One point to note is that this mapping method could be used without physical tiles or gridlines. We used a 12x12 array of tiles to map wall configurations, but we could have chosen any number of “tiles” and had virtual gridlines. The physical tiles just made it easy to check that the program was updating correctly. For better resolution, we could have chosen a much larger array size and divided the course into smaller sections.

The ladar function uses the current robot position to decide when to initiate a scan. If the robot is within 30% of a tile-grid intersection (in both x and y), the ladar scans and passes the data to a processing function. That function examines the scan output and checks the ladar distance readings at fixed angles on either side of the robot’s center: one pair of angles for the front walls and another for the side walls. By looking in these two places, the robot can “see” the wall configuration of any given tile. Since the robot occupies physical space, it is assumed that wherever the robot is located there are no walls.
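
The trigger condition itself is simple. A sketch, assuming positions are kept in tile units so intersections fall on integer coordinates:

#include <math.h>

/* Returns 1 when the robot is within 30% of a tile length of a gridline
 * intersection in both x and y, i.e. close enough to start a scan. */
int near_intersection(float x, float y)
{
    return fabsf(x - roundf(x)) < 0.3f &&
           fabsf(y - roundf(y)) < 0.3f;
}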

As the robot travels the course, it will have to change direction. Therefore, all wall readings must be stored with respect to an absolute reference frame, not the robot’s reference frame. To accomplish this, data is saved in a 12x12 array in which a single number corresponds to a given tile configuration. All entries are initialized to zero when the array is created. Once the robot gathers information about a tile, the corresponding entry is changed to a number between 1 and 9, and a check then prevents that non-zero entry from being overwritten. The figure below shows how the numbering system is related to the wall configuration.
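
A sketch of this update rule with the no-overwrite guard (the array and function names are illustrative):

#define MAP_SIZE 12

/* 0 = unexplored; 1..9 = a wall configuration from the numbering figure */
int course_map[MAP_SIZE][MAP_SIZE];

/* Record a tile's wall configuration, keeping the first reading so that
 * later scans cannot overwrite an entry that is already known. */
void record_tile(int row, int col, int config)
{
    if (row < 0 || row >= MAP_SIZE || col < 0 || col >= MAP_SIZE)
        return;                      /* ignore out-of-range readings */
    if (course_map[row][col] == 0)   /* only write unexplored tiles  */
        course_map[row][col] = config;
}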

A sample output of the robot moving through a course is given below. Note that the robot requires the walls to be positioned on gridlines. It also requires at least two tiles of width for each passageway, as the robot is larger than one tile and would not fit through a smaller passage. Also note that the upper right corner shows the robot successfully navigating a U-turn. The output below is the 12x12 array of tile-configuration codes, one row of tiles per line.

3 4 4 4 4 5 3 4 4 5 3 5
2 1 1 1 1 6 2 1 1 6 2 6
2 1 1 6 2 1 1 1 1 6 2 6
2 1 1 6 9 8 8 8 1 6 2 6
2 1 1 6 3 4 4 4 1 1 1 6
9 8 8 7 2 1 1 1 1 1 1 6
3 4 4 5 2 6 2 1 1 1 1 6
2 1 1 6 2 6 2 1 8 8 8 7
2 1 1 1 1 1 1 1 4 4 4 5
2 1 1 1 1 1 1 1 1 1 1 6
2 1 1 6 2 1 1 6 2 1 1 6
9 8 8 7 9 8 8 7 9 8 8 7

The path optimization code uses a modified wavefront algorithm (http://www.societyofrobots.com/programming_wavefront.shtml) to find the shortest path from the robot location to several checkpoints. An array containing the map characteristics, such as the walls, the robot position, and the checkpoint locations, provides the parameters the wavefront algorithm needs to determine the shortest path. The code accepts one or more checkpoints and takes the robot through all of them, stopping at the final destination.
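
A minimal sketch of the wave propagation, written with an explicit queue rather than the repeated grid sweeps shown on the referenced page; collapsing the 1-9 tile codes into free/wall cells is a simplification (the conversion routines mentioned below would handle it), and all names are illustrative:

#define N     12
#define FREE   0   /* unvisited, traversable tile              */
#define WALL   1   /* blocked tile; never expanded             */
#define GOAL   2   /* checkpoint cell; waves count upward from 2 */

/* Propagate the wavefront outward from the goal: each traversable cell
 * receives 1 + the value of the neighbor that reached it first, so the
 * stored value encodes the distance to the goal. */
void wavefront(int grid[N][N])
{
    int qr[N * N], qc[N * N];           /* simple FIFO queue of cells */
    int head = 0, tail = 0;

    /* seed the wave with the goal cell(s) */
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            if (grid[r][c] == GOAL) { qr[tail] = r; qc[tail] = c; tail++; }

    static const int dr[4] = { -1, 1, 0, 0 };
    static const int dc[4] = { 0, 0, -1, 1 };

    while (head < tail) {
        int r = qr[head], c = qc[head];
        head++;
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= N || nc < 0 || nc >= N)
                continue;
            if (grid[nr][nc] == FREE) {          /* expand the wave */
                grid[nr][nc] = grid[r][c] + 1;
                qr[tail] = nr; qc[tail] = nc; tail++;
            }
        }
    }
}

Stepping from the robot’s cell to any neighbor with a strictly smaller value then traces a shortest path to the checkpoint, and repeating the propagation with the next checkpoint as the goal strings multiple legs together.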

The project consisted of two independent programs that had to be interfaced to communicate properly. The ladar mapping program and the path_optimization code run separately, and each has its own parameters, map arrays, and coordinate systems. The Linux code from lab was repurposed and expanded with an option to write the array from the mapping program into a formatted text file; the same Linux code could then read this file into path_optimization. Checkpoints could also be entered by the user when starting the path. Several initialization routines were added to the two programs to convert between the separate data structures.
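
The hand-off itself can be sketched as plain text-file I/O; the whitespace-separated layout and the function names here are assumptions about the format, not the report’s exact file:

#include <stdio.h>

#define N 12

/* Mapping side: dump the 12x12 tile array to a formatted text file.
 * Returns 0 on success, -1 on failure. */
int write_map(const char *path, int map[N][N])
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    for (int r = 0; r < N; r++) {
        for (int c = 0; c < N; c++)
            fprintf(f, "%d ", map[r][c]);   /* one row of tiles per line */
        fputc('\n', f);
    }
    return fclose(f);
}

/* Path-optimization side: read the same file back into its own array. */
int read_map(const char *path, int map[N][N])
{
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            if (fscanf(f, "%d", &map[r][c]) != 1) { fclose(f); return -1; }
    return fclose(f);
}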